Results 1 - 20 of 43
1.
Comput Methods Programs Biomed ; 244: 107943, 2024 Feb.
Article in English | MEDLINE | ID: mdl-38042693

ABSTRACT

BACKGROUND AND OBJECTIVE: In type 1 diabetes (T1D), a quantitative evaluation of the impact on hypoglycemia of suboptimal therapeutic decisions (e.g., incorrect estimation of the ingested carbohydrates or inaccurate insulin timing) is unavailable. Clinical trials to measure sensitivity to patient actions would be expensive, exposed to confounding factors, and risky for the participants. In this work, a T1D patient decision simulator (T1D-PDS), which realistically reproduces blood glucose dynamics in a large virtual population, is used to perform extensive in-silico trials; the resulting data are then employed in a sensitivity analysis that ranks different behavioral factors by their impact on a clinically meaningful parameter, the time below range (TBR). METHODS: Eleven behavioral factors impacting hypoglycemia are considered. The T1D-PDS was used to perform multiple 2-week simulations involving 100 adults, testing about 3500 different perturbations of nominal behavior. A local linear approximation of the function linking the TBR to the factors was computed to derive sensitivity indices (SIs), quantifying the impact of each factor on TBR variations. RESULTS: The obtained ranking quantifies the importance of each factor relative to the others. Factors plausibly related to hypoglycemia were correctly placed at the top of the ranking, including systematic (SI=2.05%) and random (SI=1.35%) carb-counting error, hypotreatment dose (SI=-1.21%), and insulin bolus timing relative to mealtime (SI=1.09%). CONCLUSIONS: The obtained SIs allowed the behavioral factors to be ranked by their impact on TBR. The factors identified as most influential can be prioritized in patient training.
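The sensitivity-analysis step described above can be sketched as follows: fit a local linear approximation of TBR versus factor perturbations and use the fitted slopes as sensitivity indices. The stand-in "simulator", factor count, and weights below are illustrative assumptions, not the T1D-PDS.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for the T1D-PDS: TBR (%) as an unknown, mildly
# nonlinear function of behavioral-factor perturbations. The slopes are
# illustrative; they only give the sketch something to recover.
def simulated_tbr(x):
    true_slopes = np.array([2.0, 1.3, -1.2, 1.1])
    return 5.0 + x @ true_slopes + 0.1 * x[0] ** 2

n_factors, n_trials = 4, 500
X = rng.uniform(-1, 1, size=(n_trials, n_factors))  # perturbations of nominal behavior
y = np.array([simulated_tbr(x) for x in X])

# Local linear approximation of TBR vs. the factors: the least-squares
# slopes serve as sensitivity indices (SIs)
A = np.column_stack([np.ones(n_trials), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
si = coef[1:]
ranking = np.argsort(-np.abs(si))   # most influential factor first
```

The ranking by |SI| mirrors the paper's ordering of behavioral factors by impact on TBR.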


Subjects
Diabetes Mellitus, Type 1; Hypoglycemia; Adult; Humans; Diabetes Mellitus, Type 1/drug therapy; Hypoglycemic Agents; Blood Glucose Self-Monitoring; Hypoglycemia/drug therapy; Insulin; Blood Glucose
2.
Front Bioeng Biotechnol ; 11: 1280233, 2023.
Article in English | MEDLINE | ID: mdl-38076424

ABSTRACT

Introduction: The retrospective analysis of continuous glucose monitoring (CGM) timeseries can be hampered by colored and non-stationary measurement noise. Here, we introduce a Bayesian denoising (BD) algorithm to address both the autocorrelation of measurement noise and the temporal variability of its variance. Methods: BD utilizes adaptive, a-priori models of signal and noise, whose unknown variances are estimated on partially overlapped CGM windows via a smoothing approach based on linear mean square estimation. The CGM signal and noise variability profiles are then reconstructed using a kernel smoother. BD is first assessed on two simulated datasets, DS1 and DS2. On DS1, the effectiveness of accounting for colored noise is evaluated by comparison against a literature algorithm; on DS2, the effectiveness of accounting for the temporal variability of the noise variance is evaluated by comparison against a Butterworth filter. BD is then evaluated on 15 CGM timeseries measured by the Dexcom G6 (DR). Results: On DS1, BD reduces the root-mean-square error (RMSE) from 8.10 [6.79-9.24] mg/dL to 6.28 [5.47-7.27] mg/dL (median [IQR]); on DS2, RMSE decreases from 6.85 [5.50-8.72] mg/dL to 5.35 [4.48-6.49] mg/dL. On DR, BD tracks the noise variance variability reasonably well and provides satisfactory denoising. Discussion: The new algorithm effectively addresses the nature of CGM measurement error, outperforming existing denoising algorithms.
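One building block of the pipeline above, estimating a noise-variance profile on partially overlapped windows and reconstructing it with a kernel smoother, can be sketched as follows. This is not the paper's Bayesian estimator; the trace, window sizes, and bandwidth are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic CGM-like trace (5-min samples): slow glucose trend plus noise
# whose standard deviation drifts over time, i.e., non-stationary noise
t = np.arange(0, 24 * 60, 5, dtype=float)        # one day, in minutes
true_sd = 2.0 + 3.0 * t / t[-1]                  # noise SD grows 2 -> 5 mg/dL
cgm = 120 + 30 * np.sin(2 * np.pi * t / (24 * 60)) + rng.normal(0, true_sd)

# Raw noise-variance estimates on partially overlapped windows
win, step = 36, 12                               # 3-h windows, 1-h step
centers, raw_var = [], []
for start in range(0, len(t) - win + 1, step):
    seg = cgm[start:start + win]
    xs = np.arange(win, dtype=float)
    resid = seg - np.polyval(np.polyfit(xs, seg, 2), xs)  # detrend, keep noise
    centers.append(t[start + win // 2])
    raw_var.append(resid.var(ddof=3))
centers, raw_var = np.array(centers), np.array(raw_var)

# Nadaraya-Watson kernel smoother reconstructing the variance profile
def kernel_smooth(xq, x, y, bandwidth=120.0):
    w = np.exp(-0.5 * ((xq[:, None] - x[None, :]) / bandwidth) ** 2)
    return (w * y).sum(axis=1) / w.sum(axis=1)

var_profile = kernel_smooth(t, centers, raw_var)
```

The smoothed profile recovers the upward drift of the noise variance that a fixed-cutoff filter (e.g., Butterworth) cannot adapt to.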

3.
BMC Med Inform Decis Mak ; 23(1): 253, 2023 11 08.
Article in English | MEDLINE | ID: mdl-37940954

ABSTRACT

BACKGROUND: The ageing global population presents significant public health challenges, especially in relation to the subjective wellbeing of the elderly. In this study, our aim was to investigate the potential for developing a model to forecast the two-year variation of the perceived wellbeing of individuals aged over 50. We also aimed to identify the variables that predict changes in subjective wellbeing, as measured by the CASP-12 scale, over a two-year period. METHODS: Data from the European SHARE project were used, specifically the demographic, health, social and financial variables of 9422 subjects. Subjective wellbeing was measured through the CASP-12 scale. The study outcome was defined as binary, i.e., worsening/not worsening of the CASP-12 score over 2 years. Logistic regression, logistic regression with LASSO regularisation, and random forest were considered candidate models. Performance was assessed in terms of accuracy in correctly predicting the outcome, Area Under the Curve (AUC), and F1 score. RESULTS: The best-performing model was the random forest, achieving an accuracy of 65%, AUC = 0.659, and F1 = 0.710. All models proved able to generalise both across subjects and over time. The most predictive variables were the CASP-12 score at baseline, the presence of depression, and financial difficulties. CONCLUSIONS: While we identify the random forest model as the most suitable, given the similarity in performance, the models based on logistic regression with or without LASSO regularisation are also viable options.
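The modelling setup above (binary worsening outcome, random forest, accuracy/AUC/F1) can be sketched on synthetic data. The features and effect sizes below are illustrative stand-ins, not the SHARE variables or the paper's results.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, f1_score, roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)

# Synthetic stand-in for SHARE-like data: baseline CASP-12, depression flag,
# financial difficulties, age (all distributions assumed for illustration)
n = 2000
X = np.column_stack([
    rng.normal(36, 6, n),        # CASP-12 at baseline
    rng.binomial(1, 0.30, n),    # depression
    rng.binomial(1, 0.25, n),    # financial difficulties
    rng.uniform(50, 90, n),      # age
])
logit = -0.08 * (X[:, 0] - 36) + 0.9 * X[:, 1] + 0.7 * X[:, 2] - 1.0
y = rng.binomial(1, 1 / (1 + np.exp(-logit)))   # 1 = wellbeing worsens

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)

pred = clf.predict(X_te)
acc = accuracy_score(y_te, pred)
auc = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])
f1 = f1_score(y_te, pred)
```

`clf.feature_importances_` would then point at the most predictive variables, analogous to the paper's identification of baseline CASP-12, depression, and financial difficulties.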


Subjects
Aging; Machine Learning; Humans; Aged; Middle Aged; Forecasting; Logistic Models; Random Forest Algorithm
4.
Comput Methods Programs Biomed ; 240: 107700, 2023 Oct.
Article in English | MEDLINE | ID: mdl-37437469

ABSTRACT

BACKGROUND AND OBJECTIVE: Continuous glucose monitoring (CGM) sensors measure interstitial glucose concentration every 1-5 min for days or weeks. New CGM-based diabetes therapies are often tested in in silico clinical trials (ISCTs) using diabetes simulators. Accurate models of CGM sensor inaccuracies and failures could help improve the realism of ISCTs. However, the modeling of CGM failures has not yet been fully addressed in the literature. This work aims to develop a mathematical model of CGM gaps, i.e., occasional portions of missing data generated by temporary sensor errors (e.g., excessive noise or artifacts). METHODS: Two datasets containing CGM traces collected in 167 adults and 205 children, respectively, using the Dexcom G6 sensor (Dexcom Inc., San Diego, CA) were used. Four Markov models, of increasing complexity, were designed to describe three main characteristics: number of gaps for each sensor, gap distribution in the monitoring days, and gap duration. Each model was identified on a portion of each dataset (training set). The remaining portion of each dataset (real test set) was used to evaluate model performance through a Monte Carlo simulation approach. Each model was used to generate 100 simulated test sets with the same size as the real test set. The distributions of gap characteristics on the simulated test sets were compared with those observed on the real test set, using the two-sample Kolmogorov-Smirnov test and the Jensen-Shannon divergence. RESULTS: A six-state Markov model, having two states to describe normal sensor operation and four states to describe gap occurrence, achieved the best results. For this model, the Kolmogorov-Smirnov test found no significant differences between the distribution of simulated and real gap characteristics. Moreover, this model obtained significantly lower Jensen-Shannon divergence values than the other models. CONCLUSIONS: A Markov model describing CGM gaps was developed and validated on two real datasets. 
The model accurately describes the number of gaps per sensor, the distribution of gaps over the monitoring days, and the gap durations. Such a model can be integrated into existing diabetes simulators to realistically simulate CGM gaps in ISCTs and thus enable the development of more effective and robust diabetes management strategies.
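The model-validation step described above, comparing gap-characteristic distributions between real data and Monte Carlo simulations via the two-sample Kolmogorov-Smirnov test and the Jensen-Shannon measure, can be sketched as follows. The gap-duration distributions are assumed (gamma-shaped), purely for illustration.

```python
import numpy as np
from scipy.spatial.distance import jensenshannon
from scipy.stats import ks_2samp

rng = np.random.default_rng(3)

# Illustrative gap-duration samples (minutes): "real" test set vs. one
# Monte Carlo set from a well-matched model and one from a mismatched model
real_gaps = rng.gamma(shape=1.5, scale=20.0, size=400)
model_gaps = rng.gamma(shape=1.5, scale=20.0, size=400)      # good model
bad_model_gaps = rng.gamma(shape=3.0, scale=20.0, size=400)  # mismatched model

# Two-sample Kolmogorov-Smirnov test: a large p-value means no evidence
# of a difference between simulated and real distributions
p_good = ks_2samp(real_gaps, model_gaps).pvalue
p_bad = ks_2samp(real_gaps, bad_model_gaps).pvalue

# Jensen-Shannon distance between binned duration distributions
# (scipy returns the JS distance, the square root of the JS divergence)
bins = np.linspace(0, 200, 41)
def js(a, b):
    pa, _ = np.histogram(a, bins=bins, density=True)
    pb, _ = np.histogram(b, bins=bins, density=True)
    return jensenshannon(pa, pb)

d_good, d_bad = js(real_gaps, model_gaps), js(real_gaps, bad_model_gaps)
```

A well-fitting gap model yields large KS p-values and small JS distances against the real test set, which is the selection criterion the paper applies to its six-state Markov model.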


Subjects
Diabetes Mellitus, Type 1; Diabetes Mellitus; Adult; Child; Humans; Blood Glucose; Blood Glucose Self-Monitoring/methods; Calibration; Models, Theoretical; Diabetes Mellitus, Type 1/drug therapy
5.
Artif Intell Med ; 142: 102588, 2023 08.
Article in English | MEDLINE | ID: mdl-37316101

ABSTRACT

BACKGROUND: Amyotrophic Lateral Sclerosis (ALS) is a fatal neurodegenerative disorder characterised by the progressive loss of motor neurons in the brain and spinal cord. The fact that ALS's disease course is highly heterogeneous, and its determinants not fully known, combined with ALS's relatively low prevalence, renders the successful application of artificial intelligence (AI) techniques particularly arduous. OBJECTIVE: This systematic review aims to identify areas of agreement and unanswered questions regarding two notable applications of AI in ALS, namely the automatic, data-driven stratification of patients according to their phenotype, and the prediction of ALS progression. Unlike previous works, this review focuses on the methodological landscape of AI in ALS. METHODS: We conducted a systematic search of the Scopus and PubMed databases, looking for studies on data-driven stratification methods based on unsupervised techniques resulting in (A) automatic group discovery or (B) a transformation of the feature space allowing patient subgroups to be identified; and for studies on internally or externally validated methods for the prediction of ALS progression. We described the selected studies according to the following characteristics, when applicable: variables used, methodology, splitting criteria and number of groups, prediction outcomes, validation schemes, and metrics. RESULTS: Of the starting 1604 unique reports (2837 combined hits between Scopus and PubMed), 239 were selected for thorough screening, leading to the inclusion of 15 studies on patient stratification, 28 on prediction of ALS progression, and 6 on both stratification and prediction. In terms of variables used, most stratification and prediction studies included demographics and features derived from the ALSFRS or ALSFRS-R scores, which were also the main prediction targets.
The most represented stratification methods were K-means, hierarchical, and expectation-maximisation clustering, while random forests, logistic regression, the Cox proportional hazards model, and various flavours of deep learning were the most widely used prediction methods. Validation of predictive models was, perhaps surprisingly, quite rarely performed in absolute terms (leading to the exclusion of 78 otherwise eligible studies), with the overwhelming majority of included studies resorting to internal validation only. CONCLUSION: This systematic review highlighted a general agreement in terms of input variable selection for both stratification and prediction of ALS progression, and in terms of prediction targets. A striking lack of validated models emerged, as well as a general difficulty in reproducing many published studies, mainly due to the absence of the corresponding parameter lists. While deep learning seems promising for prediction applications, its superiority with respect to traditional methods has not been established; there is, instead, ample room for its application to patient stratification. Finally, an open question remains about the role of new environmental and behavioural variables collected via novel, real-time sensors.


Subjects
Amyotrophic Lateral Sclerosis; Humans; Amyotrophic Lateral Sclerosis/diagnosis; Artificial Intelligence; Brain; Cluster Analysis; Databases, Factual
6.
IEEE Trans Biomed Eng ; 70(11): 3227-3238, 2023 Nov.
Article in English | MEDLINE | ID: mdl-37368794

ABSTRACT

OBJECTIVE: The design and assessment of new therapies for type 1 diabetes (T1D) management can be greatly facilitated by in silico simulations. The ReplayBG simulation methodology proposed here allows "replaying" the scenario behind already-collected data by simulating the glucose concentration obtained in response to alternative insulin/carbohydrate therapies and evaluating their efficacy, leveraging the concept of a digital twin. METHODS: ReplayBG is based on two steps. First, a personalized model of glucose-insulin dynamics is identified using insulin, carbohydrate, and continuous glucose monitoring (CGM) data. Then, this model is used to simulate the glucose concentration that would have been obtained by "replaying" the same portion of data under a different therapy. The validity of the methodology was evaluated on 100 virtual subjects using the UVa/Padova T1D Simulator (T1DS). In particular, the glucose concentration traces simulated by ReplayBG were compared with those provided by the T1DS in five different scenarios of insulin and carbohydrate treatment modifications. Furthermore, we compared ReplayBG with a state-of-the-art methodology for the same purpose. Finally, two case studies using real data are also presented. RESULTS: ReplayBG simulates with high accuracy the effect of the considered insulin and carbohydrate treatment alterations, performing significantly better than the state-of-the-art method in almost all considered situations. CONCLUSION: ReplayBG proved to be a reliable and robust tool to retrospectively explore the effect of new T1D treatments on glucose dynamics. It is freely available as open-source software at https://github.com/gcappon/replay-bg. SIGNIFICANCE: ReplayBG offers a new approach to preliminarily evaluate new therapies for T1D management before clinical trials.

7.
Diabetes Care ; 46(4): 864-867, 2023 04 01.
Article in English | MEDLINE | ID: mdl-36809308

ABSTRACT

OBJECTIVE: Continuous glucose monitoring (CGM) may be challenged by extreme conditions during cardiac surgery using hypothermic extracorporeal circulation (ECC). RESEARCH DESIGN AND METHODS: We evaluated the Dexcom G6 sensor in 16 subjects undergoing cardiac surgery with hypothermic ECC, of whom 11 received deep hypothermic circulatory arrest (DHCA). Arterial blood glucose, quantified by the Accu-Chek Inform II meter, served as reference. RESULTS: The intrasurgery mean absolute relative difference (MARD) of 256 paired CGM/reference values was 23.8%. MARD was 29.1% during ECC (154 pairs) and 41.6% immediately after DHCA (10 pairs), with a negative bias (signed relative difference: -13.7%, -26.6%, and -41.6%). During surgery, 86.3% of pairs were in Clarke error grid zones A or B, and 41.0% of sensor readings fulfilled the International Organization for Standardization (ISO) 15197:2013 norm. Postsurgery, MARD was 15.0%. CONCLUSIONS: Cardiac surgery using hypothermic ECC challenges the accuracy of the Dexcom G6 CGM, although accuracy appears to recover thereafter.
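The two accuracy metrics used above, MARD and the signed relative difference, are straightforward to compute over matched CGM/reference pairs. The toy values below are illustrative, not study data.

```python
import numpy as np

def mard(cgm, ref):
    """Mean absolute relative difference (%) over paired CGM/reference values."""
    cgm, ref = np.asarray(cgm, float), np.asarray(ref, float)
    return float(np.mean(np.abs(cgm - ref) / ref) * 100)

def signed_rd(cgm, ref):
    """Mean signed relative difference (%); negative means CGM reads low."""
    cgm, ref = np.asarray(cgm, float), np.asarray(ref, float)
    return float(np.mean((cgm - ref) / ref) * 100)

# Toy matched pairs (mg/dL): a CGM reading exactly 20% below the reference
ref_values = [100.0, 150.0, 200.0]
cgm_values = [80.0, 120.0, 160.0]
m = mard(cgm_values, ref_values)       # 20% absolute error
s = signed_rd(cgm_values, ref_values)  # -20%, i.e., negative bias
```

A negative signed relative difference together with a large MARD is exactly the pattern the study reports during ECC and after DHCA.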


Subjects
Cardiac Surgical Procedures; Diabetes Mellitus, Type 1; Humans; Blood Glucose; Blood Glucose Self-Monitoring; Reproducibility of Results
8.
J Diabetes Sci Technol ; 16(6): 1541-1549, 2022 Nov.
Article in English | MEDLINE | ID: mdl-33978501

ABSTRACT

BACKGROUND: In the management of type 1 diabetes (T1D), systematic and random errors in carb counting can have an adverse effect on glycemic control. In this study, we performed an in silico trial aiming at quantifying the impact of different levels of carb-counting error on glycemic control. METHODS: The T1D patient decision simulator was used to simulate 7-day glycemic profiles of 100 adults using open-loop therapy. The simulation was repeated for different values of systematic and random carb-counting error, generated with a Gaussian distribution varying the error mean from -10% to +10% and the standard deviation (SD) from 0% to 50%. The effect of the error was evaluated by computing the difference of time inside (∆TIR), above (∆TAR) and below (∆TBR) the target glycemic range (70-180 mg/dl) compared to the reference case, that is, absence of error. Finally, 3 linear regression models were developed to mathematically describe how error mean and SD variations result in ∆TIR, ∆TAR, and ∆TBR changes. RESULTS: Random errors globally deteriorate glycemic control; systematic underestimation leads to, on average, up to 5.2% more TAR than the reference case, while systematic overestimation results in up to 0.8% more TBR. The different time-in-range metrics were linearly related to the error mean and SD (R2>0.95), with slopes of βMEAN=0.21, βSD=-0.07 for ∆TIR; βMEAN=-0.25, βSD=+0.06 for ∆TAR; and βMEAN=0.05, βSD=+0.01 for ∆TBR. CONCLUSIONS: The quantification of carb-counting error impact performed in this work may be useful for understanding the causes of glycemic variability and the impact of possible therapy adjustments or behavior changes on different glucose metrics.
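The error model and the fitted linear relationships above can be sketched directly. The slope values come from the abstract; the carb-counting error generator is a minimal Gaussian sketch of the setup described, and the meal amounts are illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)

def carb_counting_estimate(true_carbs, mean_err_pct, sd_err_pct):
    """Announced carbs with Gaussian carb-counting error: mean_err_pct is the
    systematic component, sd_err_pct the random component (both in %)."""
    err = rng.normal(mean_err_pct, sd_err_pct, size=np.shape(true_carbs)) / 100.0
    return np.asarray(true_carbs, float) * (1.0 + err)

# First-order description of the outcome changes, using the slope values
# reported in the abstract (Δ metrics in %, error mean and SD in %)
def delta_tir(mean_err, sd_err): return 0.21 * mean_err - 0.07 * sd_err
def delta_tar(mean_err, sd_err): return -0.25 * mean_err + 0.06 * sd_err
def delta_tbr(mean_err, sd_err): return 0.05 * mean_err + 0.01 * sd_err

# Example: three meals announced with a 10% systematic underestimation
# and a 20% random error component
announced = carb_counting_estimate([40.0, 60.0, 80.0], mean_err_pct=-10, sd_err_pct=20)
```

These linear models are first-order descriptions fitted over the simulated range of errors, not mechanistic predictions outside it.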


Subjects
Diabetes Mellitus, Type 1; Adult; Humans; Diabetes Mellitus, Type 1/therapy; Glycemic Control; Blood Glucose; Blood Glucose Self-Monitoring
9.
Diabet Med ; 39(5): e14758, 2022 05.
Article in English | MEDLINE | ID: mdl-34862829

ABSTRACT

AIMS: Reliable estimation of the time spent in different glycaemic ranges (time-in-ranges) requires sufficiently long continuous glucose monitoring. In a 2019 paper (Battelino et al., Clinical targets for continuous glucose monitoring data interpretation: recommendations from the international consensus on time in range. Diabetes Care. 2019;42:1593-1603), an international panel of experts suggested using a correlation-based approach to obtain the minimum number of days for reliable time-in-ranges estimates. More recently (Camerlingo et al., Design of clinical trials to assess diabetes treatment: minimum duration of continuous glucose monitoring data to estimate time-in-ranges with the desired precision. Diabetes Obes Metab. 2021;23:2446-2454), we presented a mathematical equation linking the number of monitoring days to the uncertainty around time-in-ranges estimates. In this work, we compare these two approaches, focusing mainly on the time spent in the (70-180) mg/dL range (TIR). METHODS: The first 100 and 150 days of data were extracted from study A (148 subjects, ~180 days), and the first 100, 150, 200, 250 and 300 days of data from study B (45 subjects, ~365 days). For each of these data windows, the minimum monitoring duration was computed using the correlation-based and equation-based approaches. The suggested durations were compared across windows of different durations extracted from the same study, and across windows of equal duration extracted from different studies. RESULTS: When the dataset duration changes, the correlation-based approach produces inconsistent results, ranging from 23 to 64 days for TIR. The equation-based approach proved robust to this issue, as it is affected only by the characteristics of the population being monitored. Indeed, to guarantee a confidence interval of 5% around TIR, it suggests 18 days for windows from study A and 17 days for windows from study B. Similar considerations hold for the other time-in-ranges.
CONCLUSIONS: The equation-based approach offers advantages for the design of clinical trials having time-in-ranges as final end points, with focus on trial duration.


Subjects
Blood Glucose Self-Monitoring; Diabetes Mellitus, Type 1; Blood Glucose; Blood Glucose Self-Monitoring/methods; Diabetes Mellitus, Type 1/drug therapy; Humans; Time Factors
10.
Annu Int Conf IEEE Eng Med Biol Soc ; 2021: 4379-4382, 2021 11.
Article in English | MEDLINE | ID: mdl-34892190

ABSTRACT

Continuous glucose monitoring (CGM) sensors are minimally invasive sensors used in diabetes therapy to monitor interstitial glucose concentration. The measurements are collected almost continuously (e.g., every 5 min) and permit the detection of dangerous hypo/hyperglycemic episodes. Modeling the various error components affecting CGM sensors is very important, e.g., to generate realistic scenarios for developing and testing CGM-based applications in type 1 diabetes simulators. In this work we focus on data gaps, i.e., portions of missing data due to a disconnection or a temporary sensor error. A dataset of 167 adults monitored with the Dexcom (San Diego, CA) G6 sensor is considered. After evaluating some statistics (the number of gaps for each sensor, the gap distribution over the monitoring days, and the data gap durations), we develop a two-state Markov model to describe data gap occurrence. Statistics about data gaps in real data are compared with those in simulated data generated by the model via Monte Carlo simulation. Results show that the model describes quite accurately the occurrence and duration of the data gaps observed in real data.
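A two-state Markov model of this kind can be sketched as follows. The transition probabilities are illustrative assumptions, not the values identified on the Dexcom G6 dataset.

```python
import numpy as np

rng = np.random.default_rng(5)

# Two-state Markov chain over 5-min CGM samples:
# state 0 = sample present, state 1 = data gap (probabilities assumed)
P = np.array([[0.995, 0.005],    # present -> {present, gap}
              [0.200, 0.800]])   # gap -> {present, gap}

def simulate_gaps(n_samples, p=P):
    states = np.zeros(n_samples, dtype=int)
    for k in range(1, n_samples):
        states[k] = int(rng.random() < p[states[k - 1], 1])
    return states

trace = simulate_gaps(14 * 24 * 12)          # two weeks at 5-min sampling

# Gap statistics of the kind compared between real and simulated data
gap_fraction = trace.mean()
padded = np.concatenate(([0], trace, [0]))
edges = np.flatnonzero(np.diff(padded))      # alternating gap starts/ends
gap_lengths = edges[1::2] - edges[0::2]      # gap durations, in samples
```

Repeating the simulation many times (Monte Carlo) yields the distributions of gap count and duration that are then matched against the real data.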


Subjects
Blood Glucose Self-Monitoring; Diabetes Mellitus, Type 1; Adult; Blood Glucose; Computer Simulation; Humans; Monte Carlo Method
11.
Diabetes Obes Metab ; 23(11): 2446-2454, 2021 11.
Article in English | MEDLINE | ID: mdl-34212483

ABSTRACT

AIM: To compute the uncertainty of time-in-ranges, such as time in range (TIR), time in tight range (TITR), time below range (TBR) and time above range (TAR), to evaluate glucose control and to determine the minimum duration of a trial to achieve the desired precision. MATERIALS AND METHODS: Four formulas for the aforementioned time-in-ranges were obtained by estimating the equation's parameters on a training set extracted from study A (226 subjects, ~180 days, 5-minute Dexcom G4 Platinum sensor). The formulas were then validated on the remaining data. We also illustrate how to adjust the parameters for sensors with different sampling rates. Finally, we used study B (45 subjects, ~365 days, 15-minute Abbott Freestyle Libre sensor) to further validate our results. RESULTS: Our approach was effective in predicting the uncertainty when time-in-ranges are estimated using n days of continuous glucose monitoring (CGM), matching the variability observed in the data. As an example, monitoring a population with TIR = 70%, TITR = 50%, TBR = 5% and TAR = 25% for 30 days warrants a precision of ±3.50%, ±3.68%, ±1.33% and ±3.66%, respectively. CONCLUSIONS: The presented approach can be used to both compute the uncertainty of time-in-ranges and determine the minimum duration of a trial to achieve the desired precision. An online tool to facilitate its implementation is made freely available to the clinical investigator.


Subjects
Blood Glucose; Diabetes Mellitus, Type 1; Blood Glucose Self-Monitoring; Diabetes Mellitus, Type 1/drug therapy; Humans; Time Factors
12.
Diabetes Metab Res Rev ; 37(7): e3449, 2021 10.
Article in English | MEDLINE | ID: mdl-33763974

ABSTRACT

The main objective of diabetes control is to correct hyperglycaemia while avoiding hypoglycaemia, especially in insulin-treated patients. Fear of hypoglycaemia is a hurdle to effective correction of hyperglycaemia because it promotes under-dosing of insulin. Strategies to minimise hypoglycaemia include education and training for improved hypoglycaemia awareness, and the development of technologies allowing its early detection and thus minimising its occurrence. Patients with impaired hypoglycaemia awareness would benefit the most from these technologies. The purpose of this systematic review is to survey currently available or in-development technologies that support the detection of hypoglycaemia or of hypoglycaemia risk, and to identify gaps in the research. The use of nanomaterials in sensors is a promising strategy for increasing the accuracy of continuous glucose monitoring devices at low glucose values. Hypoglycaemia is associated with changes in vital signs, so the electrocardiogram and electroencephalogram could also be used to detect it. Accuracy improvements through multivariable measurements can make already-marketed galvanic skin response devices a good noninvasive alternative. Breath volatile organic compounds can be detected by dogs and by devices to alert patients at hypoglycaemia onset, while near-infrared spectroscopy can also be used as a hypoglycaemia alarm. Finally, one of the main directions of research is the use of deep learning algorithms to analyse continuous glucose monitoring data and provide earlier and more accurate prediction of hypoglycaemia. Current developments for early identification of hypoglycaemia risk combine improvements of available 'needle-type' enzymatic glucose sensors with noninvasive alternatives. Demonstrating usability for patients will be essential to allow implementation in daily diabetes management.


Subjects
Diabetes Mellitus, Type 1; Hypoglycemia; Animals; Blood Glucose; Blood Glucose Self-Monitoring/methods; Diabetes Mellitus, Type 1/complications; Dogs; Humans; Hypoglycemia/chemically induced; Hypoglycemia/diagnosis; Hypoglycemia/prevention & control; Hypoglycemic Agents/therapeutic use; Insulin/therapeutic use; Insulin Infusion Systems
13.
Sensors (Basel) ; 21(5)2021 Feb 27.
Article in English | MEDLINE | ID: mdl-33673415

ABSTRACT

In type 1 diabetes management, the availability of algorithms capable of accurately forecasting future blood glucose (BG) concentrations and hypoglycemic episodes could enable proactive therapeutic actions, e.g., the consumption of carbohydrates to mitigate, or even avoid, an impending critical event. The only input of this kind of algorithm is often continuous glucose monitoring (CGM) sensor data, because other signals (such as injected insulin, ingested carbs, and physical activity) are frequently unavailable. Several predictive algorithms fed by CGM data only have been proposed in the literature, but they were assessed using datasets originating from different experimental protocols, making a comparison of their relative merits difficult. The aim of the present work was to perform a head-to-head comparison of thirty different linear and nonlinear predictive algorithms using the same dataset, given by 124 CGM traces collected over 10 days with the newest Dexcom G6 sensor available on the market, and considering a 30-min prediction horizon. We considered the state-of-the-art methods, investigating, in particular, linear black-box methods (autoregressive; autoregressive moving-average; and autoregressive integrated moving-average, ARIMA) and nonlinear machine-learning methods (support vector regression, SVR; regression random forest; feed-forward neural network, fNN; and long short-term memory neural network). For each method, prediction accuracy and hypoglycemia detection capabilities were assessed using either population or individualized model parameters. As far as prediction accuracy is concerned, the results show that the best linear algorithm (individualized ARIMA) provides accuracy comparable to that of the best nonlinear algorithm (individualized fNN), with root mean square errors of 22.15 and 21.52 mg/dL, respectively. As far as hypoglycemia detection is concerned, the best linear algorithm (individualized ARIMA) provided precision = 64%, recall = 82%, and one false alarm/day, comparable to the best nonlinear technique (population SVR): precision = 63%, recall = 69%, and 0.5 false alarms/day. In general, the head-to-head comparison of the thirty algorithms fed by CGM data only, made on a wide dataset, shows that individualized linear models are more effective than population ones, while no significant advantage seems to emerge from nonlinear methodologies.
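The individualized-linear-prediction idea above can be sketched with a plain least-squares autoregressive model iterated to a 30-min horizon (6 steps at 5-min sampling). The synthetic trace and AR order are illustrative assumptions; this is a simpler stand-in for the ARIMA models the paper benchmarks.

```python
import numpy as np

rng = np.random.default_rng(6)

# Synthetic single-patient CGM trace (5-min samples): daily sinusoidal
# pattern plus noise, a stand-in for a real Dexcom G6 trace
t = np.arange(2000)
cgm = 140 + 40 * np.sin(2 * np.pi * t / 288) + rng.normal(0, 3, t.size)

order, horizon = 6, 6            # AR(6); 6 steps ahead = 30-min horizon

def fit_ar(y, p):
    # individualized AR model via least squares: y[k] ≈ a1*y[k-1]+...+ap*y[k-p]+c
    X = np.column_stack([np.array([y[k - p:k][::-1] for k in range(p, len(y))]),
                         np.ones(len(y) - p)])
    coef, *_ = np.linalg.lstsq(X, y[p:], rcond=None)
    return coef

def predict_ahead(history, coef, p, h):
    # iterate one-step-ahead predictions to reach the full horizon
    buf = list(history[-p:])
    for _ in range(h):
        lags = np.array(buf[-p:][::-1])
        buf.append(float(lags @ coef[:p] + coef[p]))
    return buf[-1]

# Fit on the first half (training), evaluate 30-min-ahead RMSE on the rest
coef = fit_ar(cgm[:1000], order)
errs = [predict_ahead(cgm[:k], coef, order, horizon) - cgm[k - 1 + horizon]
        for k in range(1000, len(cgm) - horizon + 1)]
rmse = float(np.sqrt(np.mean(np.square(errs))))
```

Refitting `coef` per patient is what "individualized" means in the comparison; a population model would fit one coefficient set over all training traces.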


Subjects
Blood Glucose Self-Monitoring; Blood Glucose/analysis; Hypoglycemia; Algorithms; Humans; Hypoglycemia/diagnosis
14.
Nutr Metab Cardiovasc Dis ; 31(2): 650-657, 2021 02 08.
Article in English | MEDLINE | ID: mdl-33594987

ABSTRACT

BACKGROUND AND AIMS: Continuous glucose monitoring improves glycemic control in diabetes. This study compared the accuracy of the Dexcom G5 Mobile (Dexcom, San Diego, CA) transcutaneous sensor (DG5) and the first version of the Eversense (Senseonics, Inc., Germantown, MD) implantable sensor (EVS). METHODS AND RESULTS: Subjects with type 1 diabetes (T1D) already using the EVS simultaneously wore the DG5 for seven days. On day 3, patients were admitted to a clinical research center (CRC) and received breakfast with a delayed and increased insulin bolus to induce glucose excursions. At the CRC, venous glucose was monitored every 15 min (or every 5 min during hypoglycemia) for 6 h by the YSI 2300 STAT PLUS™ glucose and lactate analyzer. At home, patients were requested to perform 4 fingerstick glucose measurements per day. Eleven patients (9 males, age 47.4 ± 11.3 years, M±SD) were enrolled. During the home stay, the median [25th-75th percentile] absolute relative difference (ARD) over all CGM-fingerstick matched pairs was 11.64% [5.38-20.65]% for the DG5 and 10.75% [5.15-19.74]% for the EVS (p-value = 0.58). At the CRC, considering all CGM-YSI matched pairs, the DG5 showed a smaller overall median ARD than the EVS: 7.91% [4.14-14.30]% vs 11.4% [5.04-18.54]% (p-value < 0.001). Considering accuracy during blood glucose swings, the DG5 performed better than the EVS when the glucose rate of change was -0.5 to -1.5 mg/dL/min, with median ARD of 7.34% [3.71-12.76]% vs 13.59% [4.53-20.78]% (p-value < 0.001), and for rates of change < -1.5 mg/dL/min, with median ARD of 5.23% [2.09-15.29]% vs 12.73% [4.14-20.82]% (p-value = 0.02). CONCLUSIONS: The DG5 was more accurate than the EVS at the CRC, especially when glucose was decreasing. No differences were found at home.


Subjects
Blood Glucose Self-Monitoring/instrumentation; Blood Glucose/metabolism; Diabetes Mellitus, Type 1/diagnosis; Transducers; Wireless Technology/instrumentation; Adult; Biomarkers/blood; Blood Glucose/drug effects; Diabetes Mellitus, Type 1/blood; Diabetes Mellitus, Type 1/therapy; Equipment Design; Female; Glycemic Control; Humans; Hypoglycemic Agents/therapeutic use; Insulin/therapeutic use; Male; Middle Aged; Predictive Value of Tests; Reproducibility of Results; Time Factors; Treatment Outcome
15.
J Diabetes Sci Technol ; 15(2): 346-359, 2021 03.
Article in English | MEDLINE | ID: mdl-32940087

ABSTRACT

BACKGROUND: In type 1 diabetes (T1D) research, in-silico clinical trials (ISCTs) have proven effective in accelerating the development of new therapies. However, published simulators lack a realistic description of some aspects of patient lifestyle which can remarkably affect glucose control. In this paper, we develop a mathematical description of meal carbohydrate (CHO) amounts and timing, with the aim of improving the meal generation module in the T1D Patient Decision Simulator (T1D-PDS) published by Vettoretti et al. METHODS: Data of 32 T1D subjects under free-living conditions for 4874 days were used. Univariate probability density function (PDF) parametric models with different candidate shapes were fitted, individually, against the sample distributions of: CHO amounts of breakfast (CHOB), lunch (CHOL), dinner (CHOD), and snacks (CHOS); breakfast timing (TB); and the time between breakfast and lunch (TBL) and between lunch and dinner (TLD). Furthermore, a support vector machine (SVM) classifier was developed to predict the occurrence of a snack in future fixed-length time windows. Once the models were embedded inside the T1D-PDS, an ISCT was performed. RESULTS: The resulting PDF models were: gamma (CHOB, CHOS), lognormal (CHOL, TB), log-logistic (CHOD), and generalized extreme value (TBL, TLD). The SVM showed a classification accuracy of 0.8 over the test set. The distributions of simulated meal data were not statistically different from the distributions of the real data used to develop the models (α = 0.05). CONCLUSIONS: The developed models of meal amount and timing variability are suitable for describing real data. Their inclusion in modules that describe patient behavior in the T1D-PDS can permit investigators to perform more realistic, reliable, and insightful ISCTs.
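Fitting candidate PDF shapes to a meal variable and picking the best one can be sketched with maximum-likelihood fits. The breakfast-carbohydrate samples below are drawn from a gamma distribution by construction (matching the shape the paper selected for CHOB); real samples would come from the subjects' diaries.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

# Illustrative breakfast-carbohydrate samples (grams); parameters assumed
cho_b = rng.gamma(shape=6.0, scale=8.0, size=500)

# Fit candidate parametric PDFs by maximum likelihood and compare the
# maximized log-likelihoods to pick the best-fitting shape
candidates = {"gamma": stats.gamma, "lognorm": stats.lognorm, "norm": stats.norm}
loglik = {}
for name, dist in candidates.items():
    params = dist.fit(cho_b)                       # MLE of shape/loc/scale
    loglik[name] = float(np.sum(dist.logpdf(cho_b, *params)))

best = max(loglik, key=loglik.get)
```

Simulated meals are then drawn from the selected distribution with `dist.rvs(*params)`, which is what allows the T1D-PDS meal module to reproduce the real data's variability.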


Subjects
Diabetes Mellitus, Type 1; Blood Glucose; Breakfast; Humans; Insulin; Meals; Models, Theoretical
16.
IEEE Trans Biomed Eng ; 68(1): 247-255, 2021 01.
Article in English | MEDLINE | ID: mdl-32746033

ABSTRACT

OBJECTIVE: This paper proposes a new machine-learning-based model to improve the calculation of mealtime insulin boluses (MIB) in type 1 diabetes (T1D) therapy using continuous glucose monitoring (CGM) data. Indeed, MIB is still often computed through the standard formula (SF), which does not account for the glucose rate of change (∆G), causing critical hypo/hyperglycemic episodes. METHODS: Four candidate models for MIB calculation, based on multiple linear regression (MLR) and the least absolute shrinkage and selection operator (LASSO), are developed. The proposed models are assessed in silico, using the UVa/Padova T1D simulator, in different mealtime scenarios and compared to the SF and three ∆G-accounting variants proposed in the literature. An assessment on real data, retrospectively analyzing 218 glycemic traces, is also performed. RESULTS: All four tested models performed better than the existing techniques. LASSO regression with an extended feature set including quadratic terms (LASSO Q) produced the best results. In silico, LASSO Q reduced the error in estimating the optimal bolus to only 0.86 U (vs. 1.45 U for the SF and 1.36-1.44 U for the literature methods), as well as hypoglycemia incidence (from 44.41% for the SF and 44.60-45.01% for the literature methods, to 35.93%). The results are confirmed by the retrospective application to real data. CONCLUSION: New models to improve MIB calculation, accounting for CGM-derived ∆G and easy-to-measure features, can be developed within a machine learning framework. In particular, a new LASSO Q model was developed, which ensures better glycemic control than the SF and other literature methods. SIGNIFICANCE: MIB dosing with the proposed LASSO Q model can potentially reduce the risk of adverse events in T1D therapy.
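The "LASSO with quadratic terms" idea can be sketched with a standard pipeline. The feature set, the assumed "optimal" bolus relationship, and all parameters below are illustrative stand-ins, not the paper's model.

```python
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures, StandardScaler

rng = np.random.default_rng(8)

# Synthetic mealtime records; features are illustrative easy-to-measure
# quantities, not the paper's exact feature set
n = 800
carbs = rng.uniform(20, 100, n)       # meal carbohydrates (g)
glucose = rng.uniform(80, 250, n)     # mealtime CGM reading (mg/dL)
dg = rng.uniform(-2, 2, n)            # glucose rate of change ∆G (mg/dL/min)
cr = rng.uniform(5, 15, n)            # carbohydrate ratio (g/U)
X = np.column_stack([carbs, glucose, dg, cr])

# Assumed "optimal" bolus: a standard-formula-like core plus a ∆G term
y = carbs / cr + (glucose - 120) / 50 + 0.8 * dg + rng.normal(0, 0.3, n)

# LASSO on an extended feature set with quadratic terms ("LASSO Q"-style):
# degree-2 expansion adds squares and pairwise interactions
model = make_pipeline(PolynomialFeatures(degree=2, include_bias=False),
                      StandardScaler(),
                      Lasso(alpha=0.01, max_iter=100000))
model.fit(X[:600], y[:600])
mae = float(np.mean(np.abs(model.predict(X[600:]) - y[600:])))
```

The interaction terms are what let a linear-in-parameters model approximate ratio-like relationships such as carbs divided by the carbohydrate ratio.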


Subjects
Type 1 Diabetes Mellitus, Blood Glucose, Blood Glucose Self-Monitoring, Type 1 Diabetes Mellitus/drug therapy, Humans, Hypoglycemic Agents/therapeutic use, Insulin/therapeutic use, Insulin Infusion Systems, Machine Learning, Retrospective Studies
17.
Sci Rep ; 10(1): 18180, 2020 10 23.
Article in English | MEDLINE | ID: mdl-33097760

ABSTRACT

Diabetes is a chronic metabolic disease that causes blood glucose (BG) concentration to make dangerous excursions outside its physiological range. Measuring the fraction of time spent by BG outside this range, and, specifically, the time-below-range (TBR), is a clinically common way to quantify the effectiveness of therapies. TBR is estimated from data recorded by continuous glucose monitoring (CGM) sensors, but the duration of CGM recording guaranteeing a reliable indicator is under debate in the literature. Here we framed the problem as a random variable estimation problem and studied the convergence of the estimator, deriving a formula that links the TBR estimation error variance with the CGM recording length. Validation is performed on CGM data of 148 subjects with type 1 diabetes. First, we show the ability of the formula to predict the uncertainty of the TBR estimate in a single patient, using patient-specific parameters; then, we prove its applicability on population data, without the need for parameter individualization. The approach can be straightforwardly extended to other similar metrics, such as time-in-range and time-above-range, widely adopted by clinicians. This strengthens its potential utility in diabetes research, e.g., in the design of those clinical trials where minimal CGM monitoring duration is crucial in cost-effectiveness terms.
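The estimator studied in this abstract is, at its core, the fraction of CGM samples below a glycemic threshold. A minimal sketch follows, with an i.i.d. variance approximation for intuition only; the synthetic trace, threshold, and sampling grid are assumptions, and the paper's actual formula also accounts for CGM autocorrelation, which the i.i.d. term ignores.

```python
import numpy as np

def tbr(glucose, threshold=70.0):
    """Time below range: fraction of CGM samples below the threshold (mg/dL)."""
    glucose = np.asarray(glucose, dtype=float)
    return float(np.mean(glucose < threshold))

rng = np.random.default_rng(2)
# Synthetic 14-day CGM trace at 5-min sampling (illustrative only).
n_samples = 14 * 24 * 12
glucose = rng.normal(140, 45, n_samples)

est = tbr(glucose)
# Under an i.i.d. approximation the estimator variance shrinks as p(1-p)/N,
# so longer recordings give tighter TBR estimates; correlated CGM samples
# make the effective N smaller than the raw sample count.
var_iid = est * (1 - est) / n_samples
```

The same function applied with `glucose > 180` as the condition gives time-above-range, and its complement time-in-range, as the abstract notes.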


Subjects
Blood Glucose Self-Monitoring/methods, Type 1 Diabetes Mellitus/blood, Hypoglycemia/blood, Datasets as Topic, Humans, Reproducibility of Results
18.
Article in English | MEDLINE | ID: mdl-32747386

ABSTRACT

INTRODUCTION: Many predictive models for incident type 2 diabetes (T2D) exist, but these models are not used frequently for public health management. Barriers to their application include (1) the problem of model choice (some models are applicable only to certain ethnic groups), (2) missing input variables, and (3) the lack of calibration. While (1) and (2) lead to missing predictions, (3) causes inaccurate incidence predictions. In this paper, a combined T2D risk model for public health management that addresses these three issues is developed. RESEARCH DESIGN AND METHODS: The combined T2D risk model combines eight existing predictive models by weighted average to overcome the problem of missing incidence predictions. Moreover, the combined model implements a simple recalibration strategy in which the risk scores are rescaled based on the T2D incidence in the target population. The performance of the combined model was compared with that of the eight existing models using data from two test datasets extracted from the Multi-Ethnic Study of Atherosclerosis (MESA; n=1031) and the English Longitudinal Study of Ageing (ELSA; n=4820). Metrics of discrimination, calibration, and missing incidence predictions were used for the assessment. RESULTS: The combined T2D model performed well in terms of both discrimination (concordance index: 0.83 on MESA; 0.77 on ELSA) and calibration (expected to observed event ratio: 1.00 on MESA; 1.17 on ELSA), similarly to the best-performing existing models. However, while the existing models yielded a large percentage of missing predictions (17%-45% on MESA; 63%-64% on ELSA), this was negligible with the combined model (0% on MESA, 4% on ELSA).
CONCLUSIONS: Leveraging existing T2D predictive models from the literature, a simple approach based on risk score rescaling and averaging was shown to provide accurate and robust incidence predictions, overcoming the problems of recalibration and missing predictions in the practical application of predictive models.
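The combination strategy (rescale each model's scores to the target population's incidence, then take a weighted average over whichever models produced a prediction for each subject) can be sketched as below. The rescaling rule, the uniform weights, and the toy scores are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

def recalibrate(scores, target_incidence):
    """Rescale risk scores so their mean matches the target population incidence."""
    scores = np.asarray(scores, dtype=float)
    scale = target_incidence / np.nanmean(scores)
    return np.clip(scores * scale, 0.0, 1.0)

def combine(score_lists, weights, target_incidence):
    """Weighted average of recalibrated scores, skipping missing (NaN)
    predictions per subject so no subject is left without a risk estimate
    as long as at least one model covers them."""
    m = np.vstack([recalibrate(s, target_incidence) for s in score_lists])
    w = np.asarray(weights, dtype=float)
    available = ~np.isnan(m)
    num = np.nansum(m * w[:, None], axis=0)
    den = (available * w[:, None]).sum(axis=0)
    return np.where(den > 0, num / den, np.nan)

# Two hypothetical models scoring four subjects; NaN marks a missing
# prediction (e.g. an input variable the model needs is unavailable).
m1 = [0.10, 0.20, np.nan, 0.40]
m2 = [0.05, np.nan, 0.30, 0.35]
combined = combine([m1, m2], weights=[1.0, 1.0], target_incidence=0.10)
```

Note how each subject keeps a prediction as long as one model covers them, which is exactly how the combined model drives the missing-prediction rate toward zero.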


Subjects
Type 2 Diabetes Mellitus, Type 2 Diabetes Mellitus/diagnosis, Type 2 Diabetes Mellitus/epidemiology, Humans, Incidence, Longitudinal Studies, Prevalence, Public Health
19.
Sensors (Basel) ; 20(14)2020 Jul 10.
Article in English | MEDLINE | ID: mdl-32664432

ABSTRACT

Wearable continuous glucose monitoring (CGM) sensors are revolutionizing the treatment of type 1 diabetes (T1D). These sensors provide in real-time, every 1-5 min, the current blood glucose concentration and its rate-of-change, two key pieces of information for improving the determination of exogenous insulin administration and the prediction of forthcoming adverse events, such as hypo-/hyper-glycemia. The current research in diabetes technology is putting considerable effort into developing decision support systems for patient use, which automatically analyze the patient's data collected by CGM sensors and other portable devices and provide personalized recommendations about therapy adjustments. Due to the large amount and variety of data collected by patients with T1D, artificial intelligence (AI) techniques are increasingly being adopted in these decision support systems. In this paper, we review the state-of-the-art methodologies using AI and CGM sensors for decision support in advanced T1D management, including techniques for personalized insulin bolus calculation, adaptive tuning of bolus calculator parameters and glucose prediction.


Subjects
Artificial Intelligence, Blood Glucose Self-Monitoring, Type 1 Diabetes Mellitus/diagnosis, Type 1 Diabetes Mellitus/therapy, Wearable Electronic Devices, Blood Glucose/analysis, Clinical Management, Humans, Insulin Infusion Systems
20.
J Biomed Inform ; 108: 103496, 2020 08.
Article in English | MEDLINE | ID: mdl-32652236

ABSTRACT

Developing a prognostic model for biomedical applications typically requires mapping an individual's set of covariates to a measure of the risk that he or she may experience the event to be predicted. Many scenarios, however, especially those involving adverse pathological outcomes, are better described by explicitly accounting for the timing of these events, as well as their probability. As a result, in these cases, traditional classification or ranking metrics may be inadequate to inform model evaluation or selection. To address this limitation, it is common practice to reframe the problem in the context of survival analysis, and resort, instead, to the concordance index (C-index), which summarises how well a predicted risk score describes an observed sequence of events. A practically meaningful interpretation of the C-index, however, may present several difficulties and pitfalls. Specifically, we identify two main issues: i) the C-index remains implicitly, and subtly, dependent on time, and ii) its relationship with the number of subjects whose risk was incorrectly predicted is not straightforward. Failure to consider these two aspects may introduce undesirable and unwanted biases in the evaluation process, and even result in the selection of a suboptimal model. Hence, here, we discuss ways to obtain a meaningful interpretation in spite of these difficulties. Aiming to assist experimenters regardless of their familiarity with the C-index, we start from an introductory-level presentation of its most popular estimator, highlighting the latter's temporal dependency, and suggesting how it might be correctly used to inform model selection. We also address the nonlinearity of the C-index with respect to the number of correct risk predictions, elaborating a simplified framework that may enable an easier interpretation and quantification of C-index improvements or deteriorations.
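The C-index discussed in this abstract is usually computed with Harrell's estimator: among comparable pairs (where the subject with the earlier time had an observed event), the fraction in which the higher-risk subject experienced the event first, with risk ties counting one half. A minimal sketch, with toy data chosen for illustration:

```python
import numpy as np

def c_index(risk, time, event):
    """Harrell's concordance index. `event[i] == 1` marks an observed event;
    0 marks censoring. A pair (i, j) is comparable when i's observed event
    precedes j's time; it is concordant when i also has the higher risk."""
    risk, time, event = map(np.asarray, (risk, time, event))
    concordant = 0.0
    comparable = 0
    n = len(risk)
    for i in range(n):
        for j in range(n):
            if event[i] == 1 and time[i] < time[j]:
                comparable += 1
                if risk[i] > risk[j]:
                    concordant += 1.0
                elif risk[i] == risk[j]:
                    concordant += 0.5
    return concordant / comparable

# Toy cohort: risk ordering matches event ordering perfectly, so C = 1.
time = [2, 4, 6, 8]
event = [1, 1, 0, 1]
risk = [0.9, 0.7, 0.4, 0.2]
c = c_index(risk, time, event)
```

The abstract's first caveat is visible in the code: censored subjects (here the one with `event == 0`) only ever appear as the later member of a pair, so the set of comparable pairs, and hence the estimate, depends on the follow-up times, not on risk discrimination alone.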


Subjects
Prognosis, Bias, Female, Humans, Male, Risk Factors, Survival Analysis